Conversation
Pull request overview
This PR adjusts the Gunicorn runtime configuration to explicitly set the number of worker processes, with the stated goal of testing whether it improves memory usage.
Changes:
- Add an explicit `workers = 2` setting to `gunicorn.conf`.
```diff
@@ -1,5 +1,6 @@
 bind = 'unix:/var/run/cabotage/nginx.sock'
 backlog = 1024
+workers = 2
```
The PR description says setting workers to 2 may help memory, but in this repo Gunicorn is started via gunicorn -c gunicorn.conf ... (see Procfile), so without an explicit value Gunicorn would default to 1 worker. Adding workers = 2 will generally increase the number of processes and can increase memory usage (and could also change throughput/latency characteristics). Consider either keeping workers at 1 for the memory experiment, or making it environment-configurable (e.g., via an env var) so you can tune per deployment without code changes.
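The environment-variable approach suggested above could look like the following sketch. Gunicorn evaluates its config file as Python, so reading the environment directly works; note that the variable name `WEB_CONCURRENCY` and the fallback of `1` are assumptions for illustration, not values taken from this repo:

```python
# Hypothetical gunicorn.conf sketch -- Gunicorn executes its config file as Python.
import os

bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024

# Read the worker count from the environment so it can be tuned per deployment
# without a code change. WEB_CONCURRENCY is an assumed variable name; the
# fallback of "1" matches Gunicorn's default of a single worker.
workers = int(os.environ.get("WEB_CONCURRENCY", "1"))
```

With this in place, the memory experiment becomes a deployment-time knob rather than a commit per trial run.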
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: be8edfd139
```diff
@@ -1,5 +1,6 @@
 bind = 'unix:/var/run/cabotage/nginx.sock'
 backlog = 1024
+workers = 2
```
Avoid forcing two Gunicorn workers to fix memory pressure
If the cabotage web process is already close to its memory limit, workers = 2 moves it in the wrong direction: Gunicorn will fork a second Django worker on every web start, so request-time heap growth is duplicated even with preload_app = True, and pydotorg/settings/cabotage.py:11-12 also keeps a separate 600-second DB connection per worker. In the exact “help memory” scenario from the PR description, this increases per-instance RSS and connection usage rather than reducing them, making OOMs more likely.
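The duplication effect is easy to sanity-check with back-of-envelope arithmetic. Every figure below is a made-up placeholder, not a measurement from this deployment; the point is only that each forked worker contributes a full copy of its request-time heap:

```python
# Rough per-instance memory model; all numbers are hypothetical placeholders.
MASTER_RSS_MB = 50    # Gunicorn master process (assumed)
WORKER_RSS_MB = 300   # one Django worker after request-time heap growth (assumed)

def total_rss_mb(workers: int) -> int:
    """Estimate instance RSS as the master plus a full heap per forked worker."""
    return MASTER_RSS_MB + workers * WORKER_RSS_MB

# Going from 1 to 2 workers adds an entire worker's RSS, since request-time
# allocations are per-process and not shared even with preload_app = True.
print(total_rss_mb(1), total_rss_mb(2))  # 350 650
```

The same per-worker multiplication applies to the long-lived DB connections: each worker holds its own, so doubling workers also doubles connection usage.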